The goal of this article: use a Redis cache to cut read load on Postgres and lower latency, while keeping a reasonable consistency guarantee.
Conclusion: for a typical web application, cache-aside is the recommended pattern, combined with a bounded TTL and cache invalidation on writes (both applied below).
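Before wiring in the real dependencies, the cache-aside read path can be sketched in a few lines, with std `HashMap`s standing in for Redis and Postgres (the function name is illustrative only):

```rust
use std::collections::HashMap;

// Cache-aside read path: HashMaps stand in for Redis (cache) and Postgres (db).
fn get_user(cache: &mut HashMap<i64, String>, db: &HashMap<i64, String>, id: i64) -> Option<String> {
    // 1) try the cache first
    if let Some(v) = cache.get(&id) {
        return Some(v.clone()); // cache hit
    }
    // 2) miss: fall back to the source of truth
    let v = db.get(&id)?.clone();
    // 3) populate the cache so subsequent reads hit
    cache.insert(id, v.clone());
    Some(v)
}

fn main() {
    let mut cache = HashMap::new();
    let mut db = HashMap::new();
    db.insert(1, "alice".to_string());
    assert_eq!(get_user(&mut cache, &db, 1), Some("alice".to_string())); // miss -> db
    assert!(cache.contains_key(&1)); // now cached
    assert_eq!(get_user(&mut cache, &db, 1), Some("alice".to_string())); // hit
    println!("ok");
}
```

The Redis version below follows exactly this shape, plus a TTL on step 3.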
The code excerpts below show how to integrate Redis.
First, add a response DTO (src/models.rs):
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

// Response DTO: User minus password_hash, so the hash never leaves the API.
// (De)serializing OffsetDateTime requires the `time` crate's "serde" feature;
// for RFC 3339 output, enable "serde-well-known" and annotate the fields with
// #[serde(with = "time::serde::rfc3339")].
#[derive(Serialize, Deserialize, Clone)]
pub struct UserResponse {
    pub id: i64,
    pub username: String,
    pub email: String,
    pub created_at: OffsetDateTime,
    pub updated_at: OffsetDateTime,
}

impl From<User> for UserResponse {
    fn from(u: User) -> Self {
        UserResponse {
            id: u.id,
            username: u.username,
            email: u.email,
            created_at: u.created_at,
            updated_at: u.updated_at,
        }
    }
}
Add redis to Cargo.toml:
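A minimal dependency entry might look like this (the version number is an assumption — check crates.io for the current release; the two features enable the tokio async API and `ConnectionManager`):

```toml
redis = { version = "0.25", features = ["tokio-comp", "connection-manager"] }
```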
In main.rs, build a Redis connection manager and inject it:
mod cache;

use redis::aio::ConnectionManager;
use redis::Client as RedisClient;

// in main(): build the Redis connection manager
let redis_url = env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1:6379/".to_string());
let redis_client = RedisClient::open(redis_url.as_str()).expect("invalid redis url");
// ConnectionManager clones cheaply and reconnects automatically; it is the type
// the cache helpers below expect (requires the "connection-manager" feature)
let redis_conn: ConnectionManager = redis_client.get_connection_manager().await.expect("redis connect failed");

// put redis_conn into an Extension alongside the pool
let app = Router::new()
    /* routes */
    .layer(Extension(pool))
    .layer(Extension(redis_conn));
Create a few cache helpers (e.g. in src/cache.rs):
use redis::aio::ConnectionManager;
use redis::AsyncCommands;

use crate::models::UserResponse;

const USER_CACHE_PREFIX: &str = "user:"; // key = user:{id}
const USER_CACHE_TTL_SECS: u64 = 60 * 5; // 5-minute TTL; tune as needed

pub async fn get_user_cache(
    redis: &mut ConnectionManager,
    id: i64,
) -> redis::RedisResult<Option<UserResponse>> {
    let key = format!("{}{}", USER_CACHE_PREFIX, id);
    // GET returns None when the key is absent or expired
    let v: Option<String> = redis.get(&key).await?;
    match v {
        Some(s) => {
            // map the serde error into a redis::RedisError so the signature stays uniform
            let user: UserResponse = serde_json::from_str(&s).map_err(|_| {
                redis::RedisError::from((redis::ErrorKind::TypeError, "serde json parse error"))
            })?;
            Ok(Some(user))
        }
        None => Ok(None),
    }
}

pub async fn set_user_cache(
    redis: &mut ConnectionManager,
    id: i64,
    user: &UserResponse,
) -> redis::RedisResult<()> {
    let key = format!("{}{}", USER_CACHE_PREFIX, id);
    let s = serde_json::to_string(user).map_err(|_| {
        redis::RedisError::from((redis::ErrorKind::TypeError, "serde json serialize error"))
    })?;
    // SETEX: set the value and its TTL atomically
    // (recent redis crate versions take the TTL as u64; older releases used usize)
    let _: () = redis.set_ex(key, s, USER_CACHE_TTL_SECS).await?;
    Ok(())
}

pub async fn invalidate_user_cache(
    redis: &mut ConnectionManager,
    id: i64,
) -> redis::RedisResult<()> {
    let key = format!("{}{}", USER_CACHE_PREFIX, id);
    let _: () = redis.del(key).await?;
    Ok(())
}
Note: get_user_cache / set_user_cache both take &mut ConnectionManager. When you extract Extension(redis_conn) in a handler, you get a clone of the ConnectionManager (clones share the underlying connection), so you can pass a mutable binding directly.
Modify handler.rs (get_user) to use the cache:
use redis::aio::ConnectionManager;

use crate::cache::{get_user_cache, set_user_cache, invalidate_user_cache};
use crate::models::UserResponse;

pub async fn get_user(
    Extension(pool): Extension<PgPool>,
    Extension(mut redis): Extension<ConnectionManager>, // axum matches Extension extractors by type, so their order does not matter
    Path(id): Path<i64>,
) -> Result<impl IntoResponse, AppError> {
    // 1) try Redis first
    if let Ok(Some(user_res)) = get_user_cache(&mut redis, id).await {
        // cache hit — return immediately (200)
        return Ok((StatusCode::OK, Json(user_res)));
    }

    // 2) cache miss -> read from the DB
    let user = sqlx::query_as::<_, User>(
        r#"
        SELECT id, username, email, password_hash, created_at, updated_at
        FROM users
        WHERE id = $1
        "#,
    )
    .bind(id)
    .fetch_optional(&pool)
    .await
    .map_err(internal_err)?;

    match user {
        Some(u) => {
            let resp: UserResponse = u.into();
            // 3) write the result back to Redis; errors are ignored so a cache
            //    failure never blocks the request
            let mut redis_for_set = redis.clone();
            let resp_clone = resp.clone();
            // fire-and-forget: set the cache in a background task
            tokio::spawn(async move {
                let _ = set_user_cache(&mut redis_for_set, id, &resp_clone).await;
            });
            Ok((StatusCode::OK, Json(resp)))
        }
        // the 404 must be expressible as AppError — e.g. via a NotFound variant
        // or a From<(StatusCode, String)> impl; adapt this to your error type
        None => Err(AppError::from((StatusCode::NOT_FOUND, format!("user {} not found", id)))),
    }
}
Modify create_user / update_user / delete_user to keep the cache consistent:
// in create_user — convert to the external response type
let resp: UserResponse = rec.clone().into();
// write the cache in the background (does not block the response);
// note we hand a redis.clone() to the background task
let mut redis_for_set = redis.clone();
let user_resp = resp.clone();
tokio::spawn(async move {
    let _ = set_user_cache(&mut redis_for_set, user_resp.id, &user_resp).await;
});
Ok((StatusCode::CREATED, Json(resp)))

// in update_user — refresh the cached entry with the new value
let resp: UserResponse = updated.into();
let mut redis_for_set = redis.clone();
let resp_clone = resp.clone();
tokio::spawn(async move {
    // use resp_clone.id here: resp itself is still needed for the response below
    let _ = set_user_cache(&mut redis_for_set, resp_clone.id, &resp_clone).await;
});
Ok((StatusCode::OK, Json(resp)))

// in delete_user — drop the cached entry
let mut redis_for_del = redis.clone();
tokio::spawn(async move {
    let _ = invalidate_user_cache(&mut redis_for_del, id).await;
});
The snippets above use tokio::spawn to run cache set/del as background tasks, so a slow or failing Redis write never blocks the main request. The cost is a short window of inconsistency when the DB has changed but the cache update failed; the TTL bounds how long that window can last. If you need stronger consistency, await set_user_cache inline in the handler instead, at the price of added latency and a larger failure surface.
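The write paths from the handlers above can be sketched the same way as the read path, with std `HashMap`s standing in for Postgres and Redis. The real code does the cache step in a spawned task; here it runs inline, which corresponds to the stronger-consistency variant just described (function names are illustrative only):

```rust
use std::collections::HashMap;

// update: commit to the source of truth first, then refresh the cached copy
fn update_user(db: &mut HashMap<i64, String>, cache: &mut HashMap<i64, String>, id: i64, name: &str) {
    db.insert(id, name.to_string());    // 1) write the database row
    cache.insert(id, name.to_string()); // 2) refresh the cache (set_user_cache)
}

// delete: remove the row, then invalidate so readers cannot see the deleted user
fn delete_user(db: &mut HashMap<i64, String>, cache: &mut HashMap<i64, String>, id: i64) {
    db.remove(&id);    // 1) delete the database row
    cache.remove(&id); // 2) invalidate the cache (invalidate_user_cache)
}

fn main() {
    let mut db = HashMap::new();
    let mut cache = HashMap::new();
    update_user(&mut db, &mut cache, 1, "alice");
    assert_eq!(cache.get(&1).map(String::as_str), Some("alice"));
    delete_user(&mut db, &mut cache, 1);
    assert!(db.get(&1).is_none() && cache.get(&1).is_none());
    println!("ok");
}
```

Ordering matters: doing the cache step after the DB commit means a crash between the two steps leaves stale (not phantom) data, which the TTL eventually clears.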
Finally, a few design notes: